Semi‑supervised end‑to‑end fake speech detection method based on time‑domain waveforms
FANG Xin, HUANG Zexin, ZHANG Yuhan, GAO Tian, PAN Jia, FU Zhonghua, GAO Jianqing, LIU Junhua, ZOU Liang
Journal of Computer Applications    2023, 43 (1): 227-231.   DOI: 10.11772/j.issn.1001-9081.2021101845
The fake speech produced by modern speech synthesis and timbre conversion systems poses a serious threat to automatic speaker recognition systems. Most existing fake speech detection systems perform well on the attack types known during training, but degrade significantly when detecting unknown attack types in practical applications. Therefore, combined with the recently proposed Dual-Path Res2Net (DP-Res2Net), a semi-supervised end-to-end fake speech detection method based on time-domain waveforms was proposed. Firstly, semi-supervised learning was adopted for domain transfer to reduce the difference in data distribution between the training and test sets. Then, for feature engineering, time-domain sampling points were input into DP-Res2Net directly, which increased the local multi-scale information and made full use of the dependence between audio segments. Finally, after the input features passed through the shallow convolution module, the feature fusion module and the global average pooling module, embedded tensors were obtained to distinguish fake speech from natural speech. The performance of the proposed method was evaluated on the publicly available ASVspoof 2021 Speech Deep Fake evaluation set as well as the VCC (Voice Conversion Challenge) dataset. Experimental results show that the Equal Error Rate (EER) of the proposed method is 19.97%, which is 10.8% lower than that of the official optimal baseline system, verifying that the proposed semi-supervised end-to-end method is effective in recognizing unknown attacks and has higher generalization capability.
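The abstract reports EER without detailing its computation, but the metric itself is standard. A minimal sketch of how an EER like the 19.97% above is obtained from detection scores, using synthetic scores and labels rather than the paper's data:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Compute the Equal Error Rate (EER) from detection scores.

    scores: higher = more likely genuine; labels: 1 = genuine, 0 = spoof.
    The EER is the operating point where the false-acceptance rate and the
    false-rejection rate cross as the decision threshold sweeps the scores.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.unique(scores)):
        accept = scores >= t
        far = np.mean(accept[labels == 0])   # spoof wrongly accepted
        frr = np.mean(~accept[labels == 1])  # genuine wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

With perfectly separable scores the EER is 0; overlapping score distributions push it toward 50%.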
Image smoothing method based on gradient surface area and sparsity constraints
LI Hui, WU Chuansheng, LIU Jun, LIU Wen
Journal of Computer Applications    2021, 41 (7): 2039-2047.   DOI: 10.11772/j.issn.1001-9081.2020081325
Concerning the problems of easy loss of low-contrast edges and incomplete suppression of texture details during texture image smoothing, an image smoothing method based on gradient surface area and sparsity constraints was proposed. Firstly, the image was regarded as a two-dimensional surface embedded in three-dimensional space. On this basis, the geometric characteristics of the image were analyzed and a regularization term based on the gradient surface area constraint was proposed, which improves texture suppression performance. Secondly, based on the statistical characteristics of the image, a hybrid regularization-constrained image smoothing model with L0 gradient sparsity and adaptive gradient surface area constraints was established. Finally, the Alternating Direction Method of Multipliers (ADMM) was used to solve the non-convex, non-smooth optimization model efficiently. Experimental results on texture suppression, edge detection, texture enhancement and image fusion show that the proposed algorithm overcomes the defects of the L0 gradient minimization smoothing method, such as the staircase effect and insufficient filtering, and is able to maintain and sharpen the significant edge contours of the image while removing a large amount of texture information.
Overview of modeling method of emergency organization decision in disaster operations management
CAO Cejun, LIU Ju
Journal of Computer Applications    2020, 40 (7): 2142-2149.   DOI: 10.11772/j.issn.1001-9081.2019112015
To improve the utilization rate of human resources, reduce the losses caused by disasters, and contribute to sustainable development, applying efficient approaches to model the decisions of emergency organizations is a critical and urgent issue. Firstly, the concepts and connotations of disaster operations management and emergency organization were given. Secondly, the current state of application and research of Semantic Bill of X (S-BOX), fractal theory, organizational theory, mathematical programming, evolutionary game theory, multi-agent simulation and other methods in emergency organization decision modeling was presented. Finally, potential research directions of emergency organization decision modeling were proposed based on bi-level optimization theory, multi-swarm evolutionary games, big data, digital twins and blockchain technology.
Microscopic image segmentation method of C.elegans based on deep learning
ZENG Zhaoxin, LIU Jun
Journal of Computer Applications    2020, 40 (5): 1453-1459.   DOI: 10.11772/j.issn.1001-9081.2019091683

To analyze the morphological parameters of Caenorhabditis elegans (C. elegans) automatically and accurately by computer, the critical step is segmenting the nematode body shape from the microscopic image. However, designing a robust C. elegans segmentation algorithm remains challenging because of heavy noise in the microscopic image, the similarity between the pixels of the nematode edge and the surrounding environment, and the flagella and other attachments of the nematode body that need to be separated. Aiming at these problems, a deep-learning-based nematode segmentation method was proposed, in which the morphological features of nematodes were learned by training a Mask Region-Convolutional Neural Network (Mask R-CNN) to realize automatic segmentation. Firstly, high-level semantic features were combined with low-level edge features by improving the multi-level feature pooling, and the Large-Margin Softmax Loss (LMSL) algorithm was incorporated to improve the loss calculation. Then, the non-maximum suppression was improved. Finally, methods such as a fully connected fusion branch were added to further optimize the segmentation results. Experimental results show that compared with the original Mask R-CNN, the proposed method has the Average Precision (AP) increased by 4.3 percentage points and the mean Intersection Over Union (mIOU) increased by 4 percentage points, which means that the proposed deep learning segmentation method can improve the segmentation accuracy effectively and segment the nematodes from microscopic images more accurately.

Image description generation method based on multi-spatial mixed attention
LIN Xianzao, LIU Jun, TIAN Sheng, XU Xiaokang, JIANG Tao
Journal of Computer Applications    2020, 40 (4): 985-989.   DOI: 10.11772/j.issn.1001-9081.2019091569
Concerning the lack of automatic information generation in offshore ship monitoring systems, and aiming to build an intelligent ship monitoring system, an image description generation method based on multi-spatial mixed attention was proposed to describe offshore ship images. The image description generation task lets the computer describe the content of an image in linguistically well-formed sentences. Firstly, the multi-spatial mixed attention model was trained on the encoded features of the regions of interest in the image; then the pretrained decoding model was fine-tuned by reconstructing the loss function with a gradient policy, yielding the final model. Experimental results on the MSCOCO (Microsoft Common Objects in COntext) image description dataset show that the proposed model outperforms the previous attention model on image description generation metrics such as the CIDEr score. The main content of a ship image can be automatically described by the model on the self-constructed ship description dataset, demonstrating that the method can provide data support for automatic information generation.
Service composition partitioning method based on process partitioning technology
LIU Huijian, LIU Junsong, WANG Jiawei, XUE Gang
Journal of Computer Applications    2020, 40 (3): 799-805.   DOI: 10.11772/j.issn.1001-9081.2019071290
In order to resolve the bottleneck of the central controller in centralized service composition, a method of constructing decentralized service composition based on process partitioning was proposed. Firstly, the business process was modeled by a typed directed graph. Then, a grouping algorithm based on the graph transformation method was proposed, and the process model was partitioned according to the grouping algorithm. Finally, the decentralized service composition was constructed according to the partitioning results. Test results show that compared with a single-thread algorithm, the grouping algorithm reduces the time consumed on model 1 by 21.4%, and the constructed decentralized service composition has lower response time and higher throughput. The experimental results show that the proposed method can effectively partition the business processes in service composition, and the constructed decentralized service composition can improve service performance.
Icing prediction of wind turbine blade based on stacked auto-encoder network
LIU Juan, HUANG Xixia, LIU Xiaoli
Journal of Computer Applications    2019, 39 (5): 1547-1550.   DOI: 10.11772/j.issn.1001-9081.2018102230
Aiming at the problem that wind turbine blade icing seriously affects the generating efficiency, safety and economy of wind turbines, a Stacked AutoEncoder (SAE) network based prediction model was proposed based on SCADA (Supervisory Control And Data Acquisition) data. The unsupervised encoding-decoding method was used to pre-train on the unlabeled dataset, and then the back propagation algorithm was used to train and fine-tune on the labeled dataset, achieving adaptive fault feature extraction and fault state classification. The complexity of traditional prediction models was effectively reduced, and the influence of handcrafted feature extraction on model performance was avoided. The historical data of wind turbine No. 15 collected by the SCADA system was used for training and testing, and the accuracy on the test set was 97.28%. Compared with models based on Support Vector Machine (SVM) and Principal Component Analysis-Support Vector Machine (PCA-SVM), whose accuracies are 91% and 93% respectively, the result indicates that the proposed model is more accurate than the other two.
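The pretrain-then-fine-tune flow described above can be sketched in miniature. This is not the paper's network: layer sizes, learning rates and data are placeholders, the autoencoders use tied weights, and for brevity only the output layer is fine-tuned (the paper back-propagates through the whole stack):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, lr=0.1, epochs=200):
    """Train one tied-weight autoencoder layer (encode-decode) on unlabeled X."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)              # encode
        R = H @ W.T + c                     # decode with tied weights
        err = R - X                         # reconstruction error
        dH = (err @ W) * H * (1 - H)        # gradient at the hidden pre-activation
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(0)
        c -= lr * err.mean(0)
    return W, b

# Stand-ins for SCADA features and icing labels (not real turbine data).
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Layer-wise unsupervised pretraining of a two-layer stack.
W1, b1 = pretrain_layer(X, 8)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = pretrain_layer(H1, 4)
H2 = sigmoid(H1 @ W2 + b2)

# Supervised fine-tuning of a logistic output layer on the labeled data.
w = np.zeros(4)
for _ in range(500):
    p = sigmoid(H2 @ w)
    w -= 0.5 * H2.T @ (p - y) / len(y)
acc = np.mean((sigmoid(H2 @ w) > 0.5) == y)
```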
Passive falling detection method based on wireless channel state information
HUANG Mengmeng, LIU Jun, ZHANG Yifan, GU Yu, REN Fuji
Journal of Computer Applications    2019, 39 (5): 1528-1533.   DOI: 10.11772/j.issn.1001-9081.2018091938
Traditional vision-based or sensor-based falling detection systems possess inherent shortcomings such as hardware dependence and coverage limitation; hence Fallsense, a passive falling detection method based on wireless Channel State Information (CSI), was proposed. The method is based on low-cost, pervasive, commercial WiFi devices. Firstly, wireless CSI data was collected and preprocessed. Then a motion-signal analysis model was built, in which a lightweight dynamic template matching algorithm was designed to detect fragments corresponding to real falling events from the time-series channel data in real time. Experiments in many real environments show that Fallsense achieves high accuracy and a low false positive rate, with an accuracy of 95% and a false positive rate of 2.44%. Compared with the classic WiFall system, Fallsense reduces the time complexity from O(mN log N) to O(N) (where N is the number of samples and m the number of features), increases the accuracy by 2.69%, and decreases the false positive rate by 4.66%. The experimental results confirm that this passive falling detection method is fast and efficient.
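The abstract's "lightweight dynamic template matching" is not specified in detail; dynamic time warping (DTW) is the textbook primitive for matching time-series fragments of different lengths, so a plain DTW distance serves as an illustrative stand-in, not as Fallsense's actual matcher:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D signal fragments.

    A template-matching primitive of the kind used to compare a live
    CSI fragment against a stored falling template: the warping path
    absorbs differences in speed/length between the two signals.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A fragment whose distance to a falling template falls below a tuned threshold would be flagged as a fall.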
Foreground detection with weighted Schatten- p norm and 3D total variation
CHEN Lixia, LIU Junli, WANG Xuewen
Journal of Computer Applications    2019, 39 (4): 1170-1175.   DOI: 10.11772/j.issn.1001-9081.2018092038
In view of the fact that low-rank and sparse methods generally regard the foreground as abnormal pixels in the background, which decreases foreground detection precision in complex scenes, a new foreground detection method combining the weighted Schatten-p norm with 3D Total Variation (3D-TV) was proposed. Firstly, the observed data were divided into a low-rank background, a moving foreground and dynamic disturbance. Then 3D total variation was used to constrain the moving foreground and strengthen the prior on the spatio-temporal continuity of foreground objects, effectively suppressing the random disturbance of anomalous pixels in a discontinuous dynamic background. Finally, the low-rank property of the video background was constrained by the weighted Schatten-p norm to remove noise interference. The experimental results show that, compared with Robust Principal Component Analysis (RPCA), Higher-order RPCA (HoRPCA) and Tensor RPCA (TRPCA), the proposed model achieves the highest F-measure, and optimal or sub-optimal recall and precision. It can be concluded that the proposed model can better overcome interference in complex scenes, such as dynamic backgrounds and severe weather, and improves both the extraction accuracy and the visual quality of the detected moving objects.
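As a small illustration of the regularizer named above, the weighted Schatten-p quasi-norm of a matrix can be evaluated directly from its singular values. The paper uses it inside an optimization model; this sketch only computes the penalty value itself:

```python
import numpy as np

def weighted_schatten_p(X, weights=None, p=0.5):
    """Weighted Schatten-p quasi-norm (raised to the power p) of matrix X:
    sum_i w_i * sigma_i**p over the singular values sigma_i.

    Smaller p pushes the penalty closer to the rank function than the
    nuclear norm does, and the weights let large (informative) singular
    values be penalized less than small (noise) ones.
    """
    s = np.linalg.svd(X, compute_uv=False)
    w = np.ones_like(s) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * s ** p))
```

Minimizing this penalty over the background term encourages the recovered video background to be low-rank while tolerating its few dominant singular values.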
Ship behavior recognition method based on multi-scale convolution
WANG Lilin, LIU Jun
Journal of Computer Applications    2019, 39 (12): 3691-3696.   DOI: 10.11772/j.issn.1001-9081.2019050896
Ship behavior recognition by human supervision in complex marine environments is inefficient. In order to solve this problem, a new ship behavior recognition method based on a multi-scale convolutional neural network was proposed. Firstly, massive ship driving data were obtained from the Automatic Identification System (AIS), and discriminative ship behavior trajectories were extracted. Secondly, according to the characteristics of the trajectory data, a behavior recognition network for ship trajectory data was designed and implemented with multi-scale convolution, and feature channel weighting and a Long Short-Term Memory network (LSTM) were used to improve the accuracy of the algorithm. Experimental results on a ship behavior dataset show that the proposed recognition network achieves 92.1% recognition accuracy for ship trajectories of a specific length, which is 5.9 percentage points higher than that of a traditional convolutional neural network, and that its stability and convergence speed are significantly improved. The proposed method effectively improves ship behavior recognition accuracy and provides efficient technical support for marine regulatory authorities.
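A toy version of the multi-scale idea: convolving one trajectory channel with kernels of several widths and stacking the responses, so short-range and long-range motion patterns sit side by side. The paper's network learns its kernels end-to-end; fixed averaging kernels are used here only to show the structure:

```python
import numpy as np

def multi_scale_features(track, kernel_sizes=(3, 5, 7)):
    """Convolve one trajectory channel (e.g. speed over time) with
    averaging kernels of several widths and stack the responses.
    Returns an array of shape (n_scales, len(track))."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                      # fixed stand-in kernel
        feats.append(np.convolve(track, kernel, mode="same"))
    return np.stack(feats)
```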
Imperialist competitive algorithm based on multiple search strategy for solving traveling salesman problem
CHEN Menghui, LIU Junlin, XU Jianfeng, LI Xiangjun
Journal of Computer Applications    2019, 39 (10): 2992-2996.   DOI: 10.11772/j.issn.1001-9081.2019030434
The imperialist competitive algorithm is a swarm intelligence optimization algorithm with strong local search ability, but excessive local search leads to loss of diversity and premature convergence to local optima. Aiming at this problem, an Imperialist Competitive Algorithm based on a Multiple Search Strategy (MSSICA) was proposed. A country was defined as a feasible solution, and the kingdoms were defined as four combinatorial artificial-chromosome mechanisms with different characteristics. The block mechanism was used to retain dominant solution fragments during the search, and differentiated combinatorial artificial-chromosome mechanisms were used in different empires to search the feasible solution information of different solution spaces. When the search fell into a local optimum, the multiple search strategy was used to inject a uniformly distributed feasible solution in place of a less advantageous one to enhance diversity. Experimental results show that the multiple search strategy can effectively improve the diversity of the imperialist competitive algorithm and improve the quality and stability of the solutions.
Efficient certificateless aggregate signcryption scheme without bilinear pairings
SU Jingfeng, LIU Juxia
Journal of Computer Applications    2018, 38 (2): 374-378.   DOI: 10.11772/j.issn.1001-9081.2017081984
Most current aggregate signcryption schemes based on bilinear pairings have low computational efficiency and are therefore unsuitable for application environments with limited computing resources and communication bandwidth. In order to improve the efficiency of aggregate signcryption, a new certificateless aggregate signcryption scheme without bilinear pairings was proposed. Based on the Diffie-Hellman problem and the Discrete Logarithm Problem (DLP), it was proven to be existentially unforgeable and confidential in the random oracle model. Compared with current typical aggregate signcryption schemes, the proposed scheme requires no bilinear pairing or exponentiation operations and needs only two point multiplications per single signcryption; it therefore has higher efficiency and a shorter ciphertext. In the aggregate signcryption verification phase, no user's secret information needs to be provided, so the proposed scheme is publicly verifiable. In addition, the proposed scheme does not need a secure channel in the partial private key generation phase, which reduces communication complexity.
Continuous ultrasound image set segmentation method based on support vector machine
LIU Jun, LI Pengfei
Journal of Computer Applications    2017, 37 (7): 2089-2094.   DOI: 10.11772/j.issn.1001-9081.2017.07.2089
Because the traditional SVM-based segmentation method needs to extract sample points from each image to create a per-image segmentation model, a novel Support Vector Machine (SVM)-based unified segmentation model was proposed for segmenting a continuous ultrasound image set. Firstly, a gray feature was extracted from the gray histogram of each image as the characteristic representing the continuity of the images in the set. Secondly, some images were selected as samples and the gray feature of each pixel was extracted. Finally, the gray feature of each pixel was combined with the image-sequence continuity feature of the image containing that pixel, and an SVM was used to train a single segmentation model that segments the whole image set. The experimental results show that compared with the traditional SVM-based segmentation method, the new model greatly reduces the workload of manually selecting sample points when segmenting a large, continuously varying image set, while maintaining segmentation accuracy.
Improved robust OctoMap based on full visibility model
LIU Jun, YUAN Peiyan, LI Yongfeng
Journal of Computer Applications    2017, 37 (5): 1445-1450.   DOI: 10.11772/j.issn.1001-9081.2017.05.1445
An improved robust OctoMap based on a full visibility model was proposed to meet the accuracy needs of 3D maps for mobile robot autonomous navigation, and it was applied to Kinect-based RGB-D SLAM (Simultaneous Localization And Mapping). First of all, connectivity was judged by considering the relative position of the camera and the target voxel together with the map resolution, to obtain the number and locations of adjacent voxels satisfying connectivity. Secondly, according to the different connectivity cases, the visibility model of the target voxel was built to establish a more universal full visibility model, which effectively overcomes the limitations of the robust OctoMap visibility model and improves accuracy. Next, the simple depth error model was replaced by a Kinect sensor depth error model based on a Gaussian mixture model, to further reduce the effect of sensor measurement error on map accuracy and reduce the uncertainty of the map. Finally, the Bayesian formula and a linear interpolation algorithm were combined to update the occupancy probability of each node in the octree, building an octree-based volumetric occupancy map. The experimental results show that the proposed method can effectively overcome the influence of Kinect depth error on map precision and reduce the uncertainty of the map, and that map accuracy is obviously improved compared with robust OctoMap.
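The Bayesian occupancy update mentioned above is conventionally carried out in log-odds form; a minimal single-voxel sketch, where the measurement probability `p_meas` would come from the paper's Gaussian-mixture depth error model rather than being a fixed constant:

```python
import math

def update_occupancy(prior, p_meas):
    """Bayesian occupancy update for one octree node in log-odds form.

    prior:  current occupancy probability of the voxel
    p_meas: inverse sensor model probability for the new measurement
    Returns the posterior occupancy probability: log-odds simply add,
    which is why successive observations can be fused cheaply.
    """
    logodds = lambda p: math.log(p / (1.0 - p))
    l = logodds(prior) + logodds(p_meas)
    return 1.0 / (1.0 + math.exp(-l))      # back to probability
```

Repeated "hit" measurements drive the voxel's probability toward 1, repeated "miss" measurements toward 0.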
Rock classification of multi-feature fusion based on collaborative representation
LIU Juexian, TENG Qizhi, WANG Zhengyong, HE Xiaohai
Journal of Computer Applications    2016, 36 (3): 854-858.   DOI: 10.11772/j.issn.1001-9081.2016.03.854
To solve the issues of time-consuming processing and low recognition rate in the traditional component analysis of rock slices, a method of component analysis of rock slices based on Collaborative Representation (CR) was proposed. Firstly, the texture features of grains in rock slices were analyzed, and combining the Hierarchical Multi-scale Local Binary Pattern (HMLBP) with the Gray Level Co-occurrence Matrix (GLCM) was shown to characterize the grain texture well. Then, in order to reduce the time complexity of classification, the dimension of the new features was reduced to 100 by using Principal Component Analysis (PCA). Finally, Collaborative Representation based Classification (CRC) was used as the classifier. Unlike Sparse Representation based Classification (SRC), prediction samples were encoded collaboratively by all the samples in the training dictionary rather than by single samples alone, so shared attributes across samples can improve the recognition rate. The experimental results show that, compared to SRC, the recognition speed of the method increases by 300% and its recognition rate increases by 2%. In practical application, it can distinguish quartz and feldspar components in rock slices well.
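A compact sketch of the CRC step described above: all training samples code the query jointly through ridge-regularized least squares, and the class whose portion of the code reconstructs the query with the smallest residual wins. Toy 2-D data stands in for the paper's 100-dimensional PCA features:

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative Representation based Classification (CRC).

    D:      dictionary of training samples, one column per sample
    labels: class label of each column
    y:      query feature vector
    The whole dictionary codes y collaboratively (ridge regression),
    then per-class reconstruction residuals decide the label.
    """
    # collaborative coding: alpha = (D^T D + lam*I)^{-1} D^T y
    A = D.T @ D + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(A, D.T @ y)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        r = np.linalg.norm(y - D[:, mask] @ alpha[mask])  # class residual
        if r < best_r:
            best, best_r = c, r
    return best
```

Because the code is a closed-form linear solve rather than an iterative sparse optimization, CRC is much faster than SRC, which matches the speed-up reported above.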
Fuzzy Chinese character recognition of license plate based on histogram of oriented gradients and Gaussian pyramid
LIU Jun, BAI Xue
Journal of Computer Applications    2016, 36 (2): 586-590.   DOI: 10.11772/j.issn.1001-9081.2016.02.0586
Concerning the low recognition rate of fuzzy license plates in existing license plate recognition methods, a new license plate recognition algorithm combining the Gaussian pyramid with the Histogram of Oriented Gradients (HOG) was proposed. Firstly, utilizing the multi-scale representation of the Gaussian pyramid, a two-layer Gaussian pyramid model was established for the fuzzy Chinese characters in the plate. Details of the fuzzy characters were described in the first layer; the second layer, obtained by smoothing and down-sampling the first layer, highlighted their main features. Extracting HOG from both layers of the Gaussian pyramid expanded the characteristic dimension of the image and enhanced the ability to recognize fuzzy Chinese characters. Finally, the fuzzy Chinese characters were recognized by a Back Propagation (BP) neural network classifier. The simulation results show that the recognition rate of the proposed method is higher than that of the plain HOG feature method and the K-L (Karhunen-Loeve) transform method in the same sample space, meaning that the proposed method can improve the effective recognition rate of fuzzy Chinese characters in video surveillance.
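The two-layer pyramid-plus-HOG feature can be sketched roughly as follows. The real method uses full HOG with cells and block normalization; this sketch reduces each layer to one global orientation histogram just to show the pyramid structure and the concatenation:

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Coarse HOG-style descriptor: one global histogram of gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def pyramid_hog(img, bins=9):
    """Two-layer Gaussian-pyramid descriptor: the original character image
    plus a smoothed, 2x-downsampled copy, with histograms concatenated."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0      # 5-tap Gaussian kernel
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                                 img.astype(float))
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0,
                                 smooth)
    down = smooth[::2, ::2]                              # layer 2
    return np.concatenate([orientation_histogram(img, bins),
                           orientation_histogram(down, bins)])
```

The concatenated vector (here 18-dimensional) is what would be fed to the BP classifier.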
Mesh simplification algorithm combined with edge collapse and local optimization
LIU Jun, FAN Hao, SUN Yu, LU Xiangyan, LIU Yan
Journal of Computer Applications    2016, 36 (2): 535-540.   DOI: 10.11772/j.issn.1001-9081.2016.02.0535
Aiming at the problems that detail features of mesh models are lost and mesh quality is poor when three-dimensional models are simplified to a low resolution by mesh simplification algorithms, a high-quality, feature-preserving mesh simplification algorithm was proposed. By introducing the concept of the approximate curvature of a vertex and combining it with the error matrix of edge collapse, the detail features of the simplified model were preserved to a great extent. At the same time, by analyzing the quality of the simplified triangular mesh, optimizing the mesh locally, and reducing the number of narrow triangles, the quality of the simplified model was improved. The proposed algorithm was tested on the Apple and Horse models and compared with two algorithms: a classical edge-collapse-based mesh simplification algorithm and an improved version of it. The experimental results show that when the models are simplified to a low resolution, the triangular meshes of the two comparison algorithms are too evenly distributed and their local details are unclear, while the meshes of the proposed algorithm are dense in areas of large curvature and sparse in flat areas, keeping local details legible. The geometric errors of the simplified models produced by the proposed algorithm are of the same magnitude as those of the two comparison algorithms, while the average mesh quality is much higher. The results verify that the proposed algorithm not only efficiently preserves the detail features of the original model, but also yields a simplified model of high quality and better appearance.
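The "error matrix of edge collapse" mentioned above is conventionally the quadric error metric; a minimal sketch of building one plane's quadric and evaluating the collapse cost at a candidate merged vertex (the paper additionally blends in approximate vertex curvature, which is omitted here):

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K = p p^T for the plane of triangle (p0, p1, p2),
    where p = (a, b, c, d) with ax + by + cz + d = 0 and (a, b, c) a
    unit normal. Summing K over a vertex's incident faces gives its
    error matrix; the sum over an edge's endpoints scores a collapse."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -n @ p0
    p = np.append(n, d)
    return np.outer(p, p)

def collapse_cost(Q, v):
    """Quadric error of placing the merged vertex at position v:
    the squared distance (in homogeneous form) to the planes in Q."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)
```

Edges are collapsed in increasing order of this cost, so flat regions simplify first and high-curvature detail survives longest.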
Construction of protein-compound interactions model
LI Huaisong, YUAN Qin, WANG Caihua, LIU Juan
Journal of Computer Applications    2014, 34 (7): 2129-2131.   DOI: 10.11772/j.issn.1001-9081.2014.07.2129

Building an interpretable, large-scale model of protein-compound interactions is a very important subject. A new chemically interpretable model of protein-compound interactions was proposed. The core idea of the model is the hypothesis that a protein-compound interaction can be decomposed into interactions between protein fragments and compound fragments, so that composing the fragment interactions yields the protein-compound interaction. Firstly, amino acid oligomer clusters and compound substructures were used to describe the protein and the compound respectively. Then the protein fragments and the compound fragments were viewed as the two parts of a bipartite graph, with fragment interactions as the edges. Based on the hypothesis, the protein-compound interaction is determined by the sum of the protein-fragment and compound-fragment interactions. The experiment demonstrates that the model's prediction accuracy reaches 97% and that the model has good interpretability.
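The bipartite decomposition hypothesis can be sketched directly. The fragment names and pair weights below are invented placeholders, since the real model learns its fragment vocabulary and edge weights from known interaction data:

```python
# Hypothetical fragment vocabularies and learned pair weights (edges of
# the bipartite fragment graph); the real model estimates these values.
pair_weight = {
    ("oligomer_A", "substruct_X"): 1.2,
    ("oligomer_A", "substruct_Y"): -0.4,
    ("oligomer_B", "substruct_X"): 0.7,
}

def interaction_score(protein_fragments, compound_fragments, bias=-1.0):
    """Score a protein-compound pair as the sum of its fragment-pair
    interaction weights, per the decomposition hypothesis."""
    s = bias
    for pf in protein_fragments:
        for cf in compound_fragments:
            s += pair_weight.get((pf, cf), 0.0)
    return s

def interacts(protein_fragments, compound_fragments):
    """Predict interaction when the summed fragment evidence is positive."""
    return interaction_score(protein_fragments, compound_fragments) > 0.0
```

Because the prediction is a sum of named fragment-pair terms, each positive prediction can be traced back to the specific fragment pairs responsible, which is the source of the model's interpretability.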

Visualization of multi-valued attribute association rules based on concept lattice
GUO Xiaobo, ZHAO Shuliang, ZHAO Jiaojiao, LIU Jundan
Journal of Computer Applications    2013, 33 (08): 2198-2203.  
Considering the problems of traditional association rule visualization approaches, including the inability to display frequent patterns and relationships among items, monotonous presentation, and especially unsuitability for representing multi-schema association rules, a new visualization algorithm for multi-valued association rule mining was proposed. It introduced a redefinition and classification of multi-valued attribute data using concept lattices, and presented the multi-valued attribute items of frequent itemsets and association rules with a concept lattice structure. This methodology can visualize frequent itemsets and multiple schemas of association rules, including one-to-one, one-to-many, many-to-one, many-to-many and concept-hierarchy rules. Finally, the advantages of these methods were illustrated with experimental data obtained from the demographic data of a province, achieving visualization of the source data, frequent patterns and association relations. The practical application analysis and experimental results show that the schema gives better visual effects for displaying frequent itemsets and genuinely multi-schema association rules.
Metagraph for genealogical relationship visualization
LIU Jundan, ZHAO Shuliang, ZHAO Jiaojiao, GUO Xiaobo, CHEN Min, LIU Mengmeng
Journal of Computer Applications    2013, 33 (07): 2037-2040.   DOI: 10.11772/j.issn.1001-9081.2013.07.2037
For the poor readability and understandability of existing display forms for genealogical data, a metagraph visualization of genealogical data was presented. In the metagraph representation of a genealogy, the generating set comprises all persons in the family, and each edge represents only the "parents-child" relationship: an edge is a pair consisting of an invertex and an outvertex, where the invertex consists of the two nodes in the marital relationship and the outvertex is the set of their child nodes. The experimental results show that the number of edges in the metagraph form is almost half that of the conventional form on the same data, and the visualization effect is significantly improved. The proposed methodology also offers guidance for the mathematical modeling of genealogy, research on genealogy visualization, and the improvement of genealogical information systems.
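The invertex/outvertex edge structure described above maps naturally onto a small data structure; the family names here are purely illustrative:

```python
# A metagraph edge maps an invertex (the parent couple) to an outvertex
# (the set of their children): one edge per couple, instead of one edge
# per parent-child pair as in the conventional graph form.
family_metagraph = [
    (frozenset({"John", "Mary"}), frozenset({"Alice", "Bob"})),
    (frozenset({"Bob", "Carol"}), frozenset({"Dave"})),
]

def children_of(couple, metagraph):
    """All children reachable from one parent couple via a single edge."""
    kids = set()
    for invertex, outvertex in metagraph:
        if invertex == frozenset(couple):
            kids |= outvertex
    return kids
```

A couple with k children needs 2k edges in the conventional parent-to-child form but only one metagraph edge, which is why the edge count roughly halves.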
Security improvement on LAOR routing protocol
ZHOU Xing, LIU Jun, DONG Chundong, ZHANG Yujing
Journal of Computer Applications    2013, 33 (06): 1619-1629.   DOI: 10.3724/SP.J.1087.2013.01619
Referring to common routing protocol threats in MANET and analyzing the properties of satellite networks, this paper identified the possible security threats to Location-Assisted On-demand Routing (LAOR) and the measures needed to make the protocol safer. Identity-based cryptography was used to realize mutual authentication between nodes and to protect the integrity of routing control packets through signatures with each node's private key. Finally, strand space analysis of the improved routing protocol proved that it satisfies plausible routing and is secure.
Feature-retained image de-noising via sparse representation
MA Lu, DENG Chengzhi, WANG Shengqian, LIU Juanjuan
Journal of Computer Applications    2013, 33 (05): 1416-1419.   DOI: 10.3724/SP.J.1087.2013.01416
Abstract900)      PDF (650KB)(585)       Save
According to the theory of sparse representation, images can be sparsely represented using an appropriately redundant dictionary. This redundancy makes it possible to capture the important information of an image with very few large coefficients, and makes the representation more robust to noise. Regarding image de-noising, and considering human visual characteristics, this paper studied the effective representation of the features and edge information of a noisy image based on a redundant dictionary. For more effective feature retention, a feature-retaining de-noising method via sparse representation was proposed, which adopts the Structural SIMilarity (SSIM) index as the fidelity measure. The experimental results indicate that the proposed algorithm de-noises more effectively, enhances the capacity for feature retention, and yields a better visual effect in the de-noised image.
Reference | Related Articles | Metrics
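The SSIM fidelity measure mentioned in the abstract has a standard closed form. A minimal global (non-windowed) version in plain Python, with the usual constants for 8-bit images, looks like this; practical SSIM is computed over sliding windows, so this is a simplification:

```python
from statistics import mean

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global Structural SIMilarity of two equal-length pixel lists.
    c1 = (0.01*255)^2 and c2 = (0.03*255)^2, the usual 8-bit constants."""
    mx, my = mean(x), mean(y)
    vx = mean((xi - mx) ** 2 for xi in x)          # variance of x
    vy = mean((yi - my) ** 2 for yi in y)          # variance of y
    cov = mean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

clean = [10, 20, 30, 40, 50]
print(ssim(clean, clean))                   # 1.0 for identical signals
print(ssim(clean, [15, 18, 35, 38, 55]))    # < 1.0 for a distorted copy
```

Using SSIM rather than mean squared error as the fidelity term is what biases the de-noiser toward preserving structure (edges, textures) that the human visual system cares about.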
Promotion-pricing decision and endogenous timing in a supply chain
LIU Jun, TAN Deqing
Journal of Computer Applications    2013, 33 (04): 971-975.   DOI: 10.3724/SP.J.1087.2013.00971
Abstract635)      PDF (755KB)(508)       Save
To determine the endogenous timing in a supply chain, a promotion-pricing game model was established for a supply chain with two manufacturers and one retailer. The effects of product substitutability and promotional efficiency on promotion-pricing strategies and endogenous timing were analyzed, and the effect of cost difference on member decisions and endogenous timing was explored through numerical simulations. It is found that the level of supply chain coordination improves as the promotional efficiency of the famous brand increases. Cost difference cannot change the overall endogenous timing and only influences its regional areas. If a researcher arbitrarily assumes the action timing of the participants in the game, wrong conclusions may be drawn.
Reference | Related Articles | Metrics
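The paper's game model is not specified in the abstract, so the following is only a generic sketch of how such pricing equilibria are found numerically: a toy duopoly with linear demand q_i = a - p_i + g * p_j (g standing in for product substitutability; all parameter values are assumptions), solved by best-response iteration:

```python
# Toy best-response iteration for a two-firm pricing game
# (illustrative only; not the paper's promotion-pricing model).

def best_response(p_other, a=10.0, g=0.5, cost=2.0):
    # firm i maximizes (p_i - cost) * (a - p_i + g * p_other);
    # first-order condition gives p_i = (a + g * p_other + cost) / 2
    return (a + g * p_other + cost) / 2.0

p1 = p2 = 0.0
for _ in range(100):                # iterate to the fixed point
    p1, p2 = best_response(p2), best_response(p1)

print(round(p1, 4), round(p2, 4))   # symmetric equilibrium price (a+c)/(2-g)
```

The same fixed-point machinery, run under different move orders (simultaneous vs. sequential), is the standard way to compare payoffs and identify which timing arises endogenously.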
Study on relationship between system matrix and reconstructed image quality in iterative image reconstruction
CHEN Honglei, HE Jianfeng, LIU Junqing, MA Lei
Journal of Computer Applications    2013, 33 (01): 53-56.   DOI: 10.3724/SP.J.1087.2013.00053
Abstract1062)      PDF (759KB)(653)       Save
In view of the complicated and inefficient calculation of the system matrix, a simple length-weighted algorithm was proposed. Compared with the traditional length-weighted algorithm, the proposed algorithm reduces the number of intersection cases between photon rays and the grid, and determines the grid index in two-dimensional coordinates. The computation of the system matrix was improved based on the proposed algorithm, an image was reconstructed with the system matrix obtained through the new process, and the quality of the reconstructed image was assessed. The experimental results show that the proposed algorithm runs more than three times faster than the improved Siddon algorithm, and that the more intersection lengths the length-weighted algorithm takes into account, the better the quality of the reconstructed image.
Reference | Related Articles | Metrics
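A length-weighted system matrix row holds, for each grid cell, the length of the ray's intersection with that cell. A simplified 2D sketch (not the paper's optimized algorithm: it collects all parametric crossings first, rather than marching Siddon-style) shows where those weights come from:

```python
import math

def cell_lengths(x0, y0, x1, y1, nx, ny):
    """Intersection length of segment (x0,y0)-(x1,y1) with each cell of an
    nx-by-ny unit grid: the weights of one length-weighted system matrix row."""
    ts = {0.0, 1.0}
    for k in range(nx + 1):                   # crossings with vertical lines x = k
        if x1 != x0:
            t = (k - x0) / (x1 - x0)
            if 0.0 < t < 1.0:
                ts.add(t)
    for k in range(ny + 1):                   # crossings with horizontal lines y = k
        if y1 != y0:
            t = (k - y0) / (y1 - y0)
            if 0.0 < t < 1.0:
                ts.add(t)
    ts = sorted(ts)
    length = math.hypot(x1 - x0, y1 - y0)
    row = {}
    for ta, tb in zip(ts, ts[1:]):
        tm = (ta + tb) / 2                    # segment midpoint locates the cell
        i, j = int(x0 + tm * (x1 - x0)), int(y0 + tm * (y1 - y0))
        if 0 <= i < nx and 0 <= j < ny:
            row[(i, j)] = row.get((i, j), 0.0) + (tb - ta) * length
    return row

row = cell_lengths(0.0, 0.5, 2.0, 0.5, 2, 1)  # horizontal ray through 2 cells
print(row)  # {(0, 0): 1.0, (1, 0): 1.0}
```

Reducing how many of these crossing cases must be enumerated per ray is exactly where the reported speedup over the improved Siddon algorithm comes from.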
Dense noise face recognition based on sparse representation and algorithm optimization
CAI Ti-jian, FAN Xiao-ping, LIU Jun-xiong
Journal of Computer Applications    2012, 32 (08): 2313-2319.   DOI: 10.3724/SP.J.1087.2012.02313
Abstract966)      PDF (611KB)(377)       Save
To improve the speed and anti-noise performance of face recognition based on sparse representation, the Cross-And-Bouquet (CAB) model and Compressed Sensing (CS) reconstruction algorithms were studied. Concerning the large matrix inversion in the reconstruction algorithm, a Fast Orthogonal Matching Pursuit (FOMP) algorithm was proposed, which converts the high-complexity matrix inversion into lightweight vector-matrix computations. To increase the amount of effective information in images with dense noise, several practical and efficient methods were put forward. The experimental results verify that these methods can effectively improve the face recognition rate under dense noise, with an identifiable noise ratio of up to 75%, and that they are of practical value.
Reference | Related Articles | Metrics
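The greedy atom-selection loop at the heart of matching pursuit can be sketched as below. This toy shows plain (non-orthogonal) matching pursuit only; the paper's FOMP additionally replaces OMP's least-squares matrix inversion with incremental vector-matrix updates, which the abstract describes but does not specify:

```python
# Plain Matching Pursuit over a tiny normalized dictionary (illustrative).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedily explain `signal` as a sparse combination of `atoms`."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        k = max(range(len(atoms)), key=lambda j: abs(dot(residual, atoms[j])))
        c = dot(residual, atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

atoms = [[1.0, 0.0], [0.0, 1.0]]          # orthonormal atoms for clarity
coeffs, residual = matching_pursuit([3.0, 4.0], atoms)
print(coeffs)     # [3.0, 4.0] -- the signal's exact sparse code
```

With an orthonormal dictionary the residual vanishes after one pass per atom; the interesting (and expensive) cases are overcomplete dictionaries, which is where FOMP's cheaper update step pays off.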
Low complexity partial transmit sequence algorithm and realization on field programmable gate array
LIU Jun-jun, YUAN Zhu, MA Teng, ZHOU Jian-hong
Journal of Computer Applications    2011, 31 (12): 3226-3229.  
Abstract1013)      PDF (601KB)(505)       Save
Conventional Partial Transmit Sequence (PTS) approaches have high computational complexity and need to transmit side information, which makes hardware implementation difficult. Concerning these problems, this paper proposed an algorithm that uses m-sequences as phase rotation factors and transfers them through pilot information. The m-sequences reduce the complexity of the Field Programmable Gate Array (FPGA) implementation, and transferring the phase rotation factors by pilots requires no side information. Matlab simulation proves that the algorithm is effective. Meanwhile, a Peak-to-Average Power Ratio (PAPR) suppression module was designed and implemented on FPGA; the results show that this module not only reduces the complexity of Orthogonal Frequency Division Multiplexing (OFDM) systems, but also performs well in PAPR suppression.
Related Articles | Metrics
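An m-sequence is the maximal-length output of a linear feedback shift register (LFSR), which is why it maps so cheaply onto FPGA hardware. A minimal software model (illustrative; the paper's tap choices and sequence lengths are not given in the abstract):

```python
def m_sequence(taps, state, length=None):
    """Generate an m-sequence from a Fibonacci LFSR.
    taps: feedback stage numbers (1-indexed); state: initial register bits."""
    n = len(state)
    if length is None:
        length = 2 ** n - 1          # an m-sequence has maximal period 2^n - 1
    out = []
    reg = list(state)
    for _ in range(length):
        out.append(reg[-1])          # output bit from the last stage
        fb = 0
        for t in taps:               # XOR of the tapped stages
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]        # shift right, feedback enters on the left
    return out

# Primitive polynomial x^3 + x^2 + 1 (taps at stages 3 and 2) gives period 7:
seq = m_sequence(taps=[3, 2], state=[1, 0, 0])
print(seq)   # 7 bits: 4 ones and 3 zeros, then the pattern repeats
```

In hardware this is just three flip-flops and one XOR gate per output bit, which is the complexity advantage the abstract alludes to over searching arbitrary phase factor combinations.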
Image interpolation algorithm based on edge degree
KONG Fanting, LIU Junhua
Journal of Computer Applications    2011, 31 (06): 1585-1587.   DOI: 10.3724/SP.J.1087.2011.01585
Abstract1421)      PDF (488KB)(449)       Save
In order to eliminate the contour jaggies produced by conventional image interpolation schemes, an edge-degree-based image interpolation method was proposed. Under the smooth-contour assumption, the method considered the smoothly varying characteristics of natural image contours and utilized edge-degree information to suppress contour jaggies. Simulation results show that the proposed method produces fewer jaggies without introducing other obvious artifacts. Furthermore, the method preserves the simplicity and efficiency of conventional image interpolation methods, and is easy to implement and apply.
Related Articles | Metrics
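The abstract does not give the paper's edge-degree formula, so the following is only a hypothetical sketch of the general idea: when interpolating the centre of a 2x2 block, weight the two diagonal averages by how smooth each diagonal is, so the new pixel follows the edge instead of cutting across it:

```python
# Hypothetical edge-aware interpolation of the centre of a 2x2 block
# (illustrative only; not the paper's edge-degree definition).

def interp_center(a, b, c, d):
    """Pixels laid out as  a b / c d ; return the interpolated centre value."""
    d1 = abs(a - d)               # variation along the '\' diagonal
    d2 = abs(b - c)               # variation along the '/' diagonal
    w1 = 1.0 / (1.0 + d1)         # smoother diagonal gets the larger weight
    w2 = 1.0 / (1.0 + d2)
    return (w1 * (a + d) / 2 + w2 * (b + c) / 2) / (w1 + w2)

# A bright '/' edge: b and c on the diagonal are 200, a and d are dark.
print(interp_center(0, 200, 200, 50))   # much closer to 200 than the
                                        # direction-blind average 112.5
```

Plain bilinear interpolation would average all four pixels and plant a grey value in the middle of the edge; direction-weighted blending like this is what suppresses the staircase (jaggy) artifact.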
TCP congestion avoidance algorithm based on adaptive congestion window
LIU Jun
Journal of Computer Applications    2011, 31 (06): 1472-1475.   DOI: 10.3724/SP.J.1087.2011.01472
Abstract1147)      PDF (554KB)(555)       Save
Concerning the unsmooth growth of the congestion window in the congestion avoidance phase of TCP Reno, the traditional Additive Increase Multiplicative Decrease (AIMD) algorithm was studied and an improved congestion avoidance algorithm was proposed, in which the congestion window grows according to a logarithmic function. In the new algorithm, the additive factor increases quickly when the network is in good condition and more slowly as the network approaches congestion. Mathematical analysis shows the feasibility of the new algorithm, and its throughput, fairness and friendliness were evaluated by NS simulation. The simulation results show the effectiveness of the algorithm.
Related Articles | Metrics
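The growth rule can be sketched as follows. Reno adds one MSS per RTT in congestion avoidance; the idea here is a logarithmic additive factor that shrinks as the window approaches the congestion point. The exact formula, the floor value and the use of ssthresh as the reference point are assumptions for illustration, not the paper's equation:

```python
import math

def next_cwnd(cwnd, ssthresh):
    """One RTT of congestion avoidance with a logarithmic additive factor:
    large steps far below ssthresh, small steps near it (floored at 0.1)."""
    increment = max(math.log(1 + ssthresh / cwnd, 2), 0.1)
    return cwnd + increment

cwnd, ssthresh = 10.0, 64.0
trace = []
for _ in range(5):
    cwnd = next_cwnd(cwnd, ssthresh)
    trace.append(round(cwnd, 2))
print(trace)   # increments shrink as cwnd grows toward ssthresh
```

Compared with Reno's constant +1 per RTT, this ramps up quickly after a loss event but probes gently near the congestion point, which is the smoothness property the abstract targets.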
Fairness improvement of PayWord protocol based on concurrent signature
LIU Jun
Journal of Computer Applications    2010, 30 (06): 1493-1494.  
Abstract1279)      PDF (486KB)(1413)       Save
Owing to its emphasis on efficiency and cost, the PayWord protocol lacks fairness. A new solution based on the PayWord protocol was proposed that protects the payment commitment of the consumer and the service commitment of the provider with concurrent signatures, so as to enhance the fairness of the PayWord protocol. The analysis results show that the new solution can better meet the micro-payment requirements for efficiency and fairness.
Related Articles | Metrics
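PayWord's efficiency comes from hash chains: the consumer signs only the chain root, and each subsequent payword is verified with cheap hash operations. A minimal sketch of that base mechanism (the concurrent-signature extension proposed by the paper is not shown):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """PayWord chain: w_n = seed, w_{i-1} = H(w_i); w_0 is the signed root."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()            # chain[0] = root w_0, chain[i] = i-th payword
    return chain

def verify_payment(root: bytes, payword: bytes, i: int) -> bool:
    """Vendor checks that hashing the i-th payword i times yields the root."""
    x = payword
    for _ in range(i):
        x = h(x)
    return x == root

chain = make_chain(b"consumer-secret", 10)
root = chain[0]                            # the only value the consumer signs
print(verify_payment(root, chain[3], 3))   # True: pays 3 units
print(verify_payment(root, chain[3], 2))   # False: wrong claimed amount
```

The fairness gap the paper addresses sits on top of this: nothing in the base scheme forces the provider to deliver once a payword is revealed, which is what binding both commitments with concurrent signatures is meant to fix.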
SAR image screening based on bit-plane characteristics
HU Can-bin, LIU Fang, ZHOU Jun-hong
Journal of Computer Applications    2009, 29 (11): 3021-3026.  
Abstract1589)      PDF (2481KB)(1198)       Save
In order to obtain Synthetic Aperture Radar (SAR) images that include typical targets of interest, a new method of SAR image screening based on bit-plane characteristics was proposed according to the imaging characteristics of the target. After suitable gray-level pretreatment of the images, the target's prior knowledge was analyzed, and the significant bit planes were identified through measures of bit-plane complexity, run length and frequency spectrum. SAR images were then screened in combination with gray histogram features. Experiments on airport SAR images show that the method can screen images rapidly; moreover, the airport target is extracted successfully, which satisfies the application requirements.
Related Articles | Metrics
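Bit-plane decomposition and a simple complexity measure can be sketched as below (an illustrative toy, not the paper's exact measures; the black-white border count shown here is one common bit-plane complexity definition):

```python
def bit_plane(img, k):
    """Extract bit plane k (0 = LSB) from a 2D list of 8-bit gray values."""
    return [[(px >> k) & 1 for px in row] for row in img]

def complexity(plane):
    """Black-white border count: 0/1 transitions along rows and columns,
    a simple measure for ranking the significance of bit planes."""
    changes = 0
    for row in plane:
        changes += sum(a != b for a, b in zip(row, row[1:]))
    for col in zip(*plane):
        changes += sum(a != b for a, b in zip(col, col[1:]))
    return changes

img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [10, 7, 250, 244]]       # bright "target" block on a dark background

# High planes follow the target structure (low complexity); low planes
# are dominated by noise-like variation (high complexity).
print(complexity(bit_plane(img, 7)), complexity(bit_plane(img, 0)))
```

Ranking planes this way is what lets the screening step concentrate on the few bit planes that actually carry target structure before the histogram-based decision.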